A One-Layer Recurrent Neural Network for Non-smooth Convex Optimization Subject to Linear Equality Constraints
Abstract
In this paper, a one-layer recurrent neural network is proposed for solving non-smooth convex optimization problems with linear equality constraints. Compared with existing neural networks, the proposed network has a simpler architecture: the number of neurons equals the number of decision variables in the optimization problem. Global convergence of the neural network is guaranteed when the non-smooth objective function is convex. Simulation results show that the state trajectories of the network converge to the optimal solutions of the non-smooth convex optimization problems and illustrate the performance of the proposed network.
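The abstract does not give the network's exact dynamics, but the general idea of such one-layer models can be sketched as a projected subgradient flow: the state moves along the negative subgradient of the objective, projected onto the null space of the equality constraints so that feasibility is preserved. Below is a minimal, hypothetical illustration (not the paper's model) for minimizing the non-smooth objective ||x||_1 subject to Ax = b, simulated with forward Euler; the matrix sizes, step size, and iteration count are arbitrary choices.

```python
import numpy as np

# Hypothetical projected subgradient flow (NOT the paper's exact model):
#     minimize ||x||_1   subject to   A x = b
# P projects onto null(A), so the trajectory stays feasible throughout.

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))
b = rng.standard_normal(2)

pinv = A.T @ np.linalg.inv(A @ A.T)   # right pseudo-inverse of A
P = np.eye(5) - pinv @ A              # projector onto the null space of A
x = pinv @ b                          # feasible start: least-norm solution

dt = 1e-2                             # Euler step size (chosen arbitrarily)
for _ in range(5000):
    g = np.sign(x)                    # a subgradient of the non-smooth ||x||_1
    x = x - dt * (P @ g)              # one-layer recurrent state update

# The state remains on the constraint set A x = b up to round-off.
print(np.linalg.norm(A @ x - b))
```

In continuous time this corresponds to the differential inclusion dx/dt ∈ -P ∂f(x), which is the kind of dynamics typically analyzed with non-smooth analysis and differential inclusion theory, as the related papers below describe.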
Similar Papers
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...
A Recurrent Neural Network for Non-smooth Convex Programming Subject to Linear Equality and Bound Constraints
In this paper, a recurrent neural network model is proposed for solving non-smooth convex programming problems, as a natural extension of previous neural networks. Using non-smooth analysis and the theory of differential inclusions, the global convergence of the equilibrium is analyzed and proved. A simulation example demonstrates the convergence of the presented neural network.
Briefs Society Neural Networks
Many real-world problems can be formulated as optimization problems with various parameters to be optimized. Some problems have a single objective, some have multiple objectives to be optimized simultaneously, and some must be optimized subject to one or more constraints. Numerous optimization algorithms have therefore been proposed to solve these problems. Particle Swarm ...
A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with existing neural networks for optimization (e.g., projection neural networks), the proposed neural network can solve more general pseudoconvex optimization problems with equality and bound constraints. M...
Maximisation of stability ranges for recurrent neural networks subject to on-line adaptation
We present conditions for the absolute stability of recurrent neural networks with time-varying weights, based on the Popov theorem from non-linear feedback system theory. We show how to maximise the stability bounds by deriving a convex optimisation problem subject to linear matrix inequality constraints, which can be solved efficiently by interior-point methods with standard software.